Bernoulli Distribution

In probability theory and statistics, the Bernoulli distribution, named after Swiss mathematician Jacob Bernoulli (James Victor Uspensky: ''Introduction to Mathematical Probability'', McGraw-Hill, New York 1937, page 45), is the discrete probability distribution of a random variable which takes the value 1 with probability p and the value 0 with probability q = 1-p. Less formally, it can be thought of as a model for the set of possible outcomes of any single experiment that asks a yes–no question. Such questions lead to outcomes that are Boolean-valued: a single bit whose value is success/yes/true/one with probability ''p'' and failure/no/false/zero with probability ''q''. It can be used to represent a (possibly biased) coin toss where 1 and 0 would represent "heads" and "tails", respectively, and ''p'' would be the probability of the coin landing on heads (or vice versa, where 1 would represent tails and ''p'' would be the probability of tails). In particular, unfair coins would have p \neq 1/2. The Bernoulli distribution is a special case of the binomial distribution where a single trial is conducted (so ''n'' would be 1 for such a binomial distribution). It is also a special case of the two-point distribution, for which the possible outcomes need not be 0 and 1.
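For readers who want to experiment, the following minimal Python sketch (not part of the original article; the helper name bernoulli_sample and the chosen values of p and n_trials are assumptions made for this illustration) simulates a biased coin as Bernoulli(''p'') draws and compares the empirical frequency of 1s with ''p''.

<syntaxhighlight lang="python">
import random

def bernoulli_sample(p, rng=random):
    """Return 1 with probability p and 0 with probability 1 - p."""
    return 1 if rng.random() < p else 0

p = 0.3          # probability of "heads" (the value 1)
n_trials = 100_000

heads = sum(bernoulli_sample(p) for _ in range(n_trials))
print(f"empirical frequency of 1: {heads / n_trials:.4f} (p = {p})")
</syntaxhighlight>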


Properties

If X is a random variable with this distribution, then:
:\Pr(X=1) = p = 1 - \Pr(X=0) = 1 - q.
The probability mass function f of this distribution, over possible outcomes ''k'', is
:f(k;p) = \begin{cases} p & \text{if } k=1, \\ q = 1-p & \text{if } k = 0. \end{cases}
This can also be expressed as
:f(k;p) = p^k (1-p)^{1-k} \quad \text{for } k\in\{0,1\}
or as
:f(k;p) = pk + (1-p)(1-k) \quad \text{for } k\in\{0,1\}.
The Bernoulli distribution is a special case of the binomial distribution with n = 1.

The kurtosis goes to infinity for high and low values of p, but for p=1/2 the two-point distributions, including the Bernoulli distribution, have a lower excess kurtosis, namely −2, than any other probability distribution. The Bernoulli distributions for 0 \le p \le 1 form an exponential family. The maximum likelihood estimator of p based on a random sample is the sample mean.
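To make the probability mass function and the maximum likelihood estimator concrete, here is a short, self-contained Python sketch (the function names bernoulli_pmf and mle_p are hypothetical, introduced only for this example and not taken from any library):

<syntaxhighlight lang="python">
def bernoulli_pmf(k, p):
    """PMF of Bernoulli(p): returns p for k == 1 and 1 - p for k == 0."""
    if k not in (0, 1):
        raise ValueError("k must be 0 or 1")
    return p ** k * (1 - p) ** (1 - k)

def mle_p(sample):
    """Maximum likelihood estimate of p: the sample mean of 0/1 data."""
    return sum(sample) / len(sample)

print(bernoulli_pmf(1, 0.3))            # 0.3
print(bernoulli_pmf(0, 0.3))            # 0.7
print(mle_p([1, 0, 1, 1, 0, 1, 0, 1]))  # 0.625
</syntaxhighlight>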


Mean

The expected value of a Bernoulli random variable X is
:\operatorname{E}[X] = p.
This is due to the fact that for a Bernoulli distributed random variable X with \Pr(X=1)=p and \Pr(X=0)=q we find
:\operatorname{E}[X] = \Pr(X=1)\cdot 1 + \Pr(X=0)\cdot 0 = p \cdot 1 + q\cdot 0 = p.
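As a quick numerical sanity check of this identity (an illustrative sketch only, using plain uniform random draws; the sample size is arbitrary):

<syntaxhighlight lang="python">
import random

p = 0.3
n = 200_000
draws = [1 if random.random() < p else 0 for _ in range(n)]

# The sample mean of 0/1 draws estimates E[X] = p.
print(sum(draws) / n)  # close to 0.3
</syntaxhighlight>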


Variance

The variance of a Bernoulli distributed X is
:\operatorname{Var}[X] = pq = p(1-p).
We first find
:\operatorname{E}[X^2] = \Pr(X=1)\cdot 1^2 + \Pr(X=0)\cdot 0^2 = p \cdot 1^2 + q\cdot 0^2 = p = \operatorname{E}[X].
From this follows
:\operatorname{Var}[X] = \operatorname{E}[X^2] - \operatorname{E}[X]^2 = \operatorname{E}[X] - \operatorname{E}[X]^2 = p - p^2 = p(1-p) = pq.
With this result it is easy to prove that, for any Bernoulli distribution, its variance will have a value inside [0, 1/4].
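The variance identity and the bound of 1/4 can be checked directly (an illustrative Python sketch; the chosen values of p are arbitrary):

<syntaxhighlight lang="python">
# Compute Var(X) = E[X^2] - E[X]^2 for several values of p and check
# that it equals p*(1 - p) and never exceeds 1/4.
for p in [0.1, 0.25, 0.5, 0.75, 0.9]:
    q = 1 - p
    e_x = p * 1 + q * 0             # E[X]
    e_x2 = p * 1 ** 2 + q * 0 ** 2  # E[X^2]
    var = e_x2 - e_x ** 2
    assert abs(var - p * q) < 1e-12
    assert var <= 0.25 + 1e-12
    print(p, var)
</syntaxhighlight>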


Skewness

The skewness is \frac{q-p}{\sqrt{pq}} = \frac{1-2p}{\sqrt{pq}}. When we take the standardized Bernoulli distributed random variable \frac{X-\operatorname{E}[X]}{\sqrt{\operatorname{Var}[X]}} we find that this random variable attains \frac{q}{\sqrt{pq}} with probability p and attains -\frac{p}{\sqrt{pq}} with probability q. Thus we get
:\begin{align}
\gamma_1 &= \operatorname{E}\left[\left(\frac{X-\operatorname{E}[X]}{\sqrt{\operatorname{Var}[X]}}\right)^3\right] \\
&= p \cdot \left(\frac{q}{\sqrt{pq}}\right)^3 + q \cdot \left(-\frac{p}{\sqrt{pq}}\right)^3 \\
&= \frac{1}{\sqrt{pq}^3} \left(pq^3 - qp^3\right) \\
&= \frac{pq}{\sqrt{pq}^3} (q-p) \\
&= \frac{q-p}{\sqrt{pq}}.
\end{align}
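The closed form for the skewness can be verified numerically (an illustrative sketch; the loop values of p are arbitrary):

<syntaxhighlight lang="python">
from math import sqrt

# Skewness via the third moment of the standardized variable,
# compared against the closed form (q - p) / sqrt(p * q).
for p in [0.2, 0.5, 0.8]:
    q = 1 - p
    sigma = sqrt(p * q)
    gamma1 = p * (q / sigma) ** 3 + q * (-p / sigma) ** 3
    print(p, gamma1, (q - p) / sigma)  # the two values agree
</syntaxhighlight>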


Higher moments and cumulants

The raw moments are all equal due to the fact that 1^k=1 and 0^k=0:
:\operatorname{E}[X^k] = \Pr(X=1)\cdot 1^k + \Pr(X=0)\cdot 0^k = p \cdot 1 + q\cdot 0 = p = \operatorname{E}[X].
The central moment of order k is given by
:\mu_k = (1-p)(-p)^k + p(1-p)^k.
The first six central moments are
:\begin{align}
\mu_1 &= 0, \\
\mu_2 &= p(1-p), \\
\mu_3 &= p(1-p)(1-2p), \\
\mu_4 &= p(1-p)(1-3p(1-p)), \\
\mu_5 &= p(1-p)(1-2p)(1-2p(1-p)), \\
\mu_6 &= p(1-p)(1-5p(1-p)(1-p(1-p))).
\end{align}
The higher central moments can be expressed more compactly in terms of \mu_2 and \mu_3:
:\begin{align}
\mu_4 &= \mu_2 (1-3\mu_2), \\
\mu_5 &= \mu_3 (1-2\mu_2), \\
\mu_6 &= \mu_2 (1-5\mu_2 (1-\mu_2)).
\end{align}
The first six cumulants are
:\begin{align}
\kappa_1 &= p, \\
\kappa_2 &= \mu_2, \\
\kappa_3 &= \mu_3, \\
\kappa_4 &= \mu_2 (1-6\mu_2), \\
\kappa_5 &= \mu_3 (1-12\mu_2), \\
\kappa_6 &= \mu_2 (1-30\mu_2 (1-4\mu_2)).
\end{align}
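The general central-moment formula can be checked against the explicit expressions above (an illustrative Python sketch for one arbitrary value of p):

<syntaxhighlight lang="python">
# Check mu_k = q*(-p)**k + p*q**k against the explicit expressions above.
p = 0.3
q = 1 - p

def mu(k):
    return q * (-p) ** k + p * q ** k

expected = {
    2: p * q,
    3: p * q * (1 - 2 * p),
    4: p * q * (1 - 3 * p * q),
    5: p * q * (1 - 2 * p) * (1 - 2 * p * q),
    6: p * q * (1 - 5 * p * q * (1 - p * q)),
}
for k, value in expected.items():
    assert abs(mu(k) - value) < 1e-12
print("central-moment identities hold for p =", p)
</syntaxhighlight>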


Related distributions

*If X_1,\dots,X_n are independent, identically distributed (i.i.d.) random variables, all Bernoulli trials with success probability ''p'', then their sum is distributed according to a binomial distribution with parameters ''n'' and ''p'':
*:\sum_{k=1}^n X_k \sim \operatorname{B}(n,p) (binomial distribution).
:The Bernoulli distribution is simply \operatorname{B}(1, p), also written as \mathrm{Bernoulli}(p); a short simulation sketch after this list illustrates this relationship.
*The categorical distribution is the generalization of the Bernoulli distribution for variables with any constant number of discrete values.
*The Beta distribution is the conjugate prior of the Bernoulli distribution.
*The geometric distribution models the number of independent and identical Bernoulli trials needed to get one success.
*If Y \sim \mathrm{Bernoulli}\left(\frac{1}{2}\right), then 2Y - 1 has a Rademacher distribution.
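A minimal simulation sketch of the binomial and Rademacher relationships above (illustrative only; the parameter values are assumptions of this example):

<syntaxhighlight lang="python">
import random

p, n = 0.4, 10

# A Binomial(n, p) draw as the sum of n independent Bernoulli(p) trials.
binomial_draw = sum(1 if random.random() < p else 0 for _ in range(n))
print(binomial_draw)  # an integer between 0 and n

# A Rademacher draw as 2*Y - 1 with Y ~ Bernoulli(1/2).
y = 1 if random.random() < 0.5 else 0
print(2 * y - 1)      # either +1 or -1
</syntaxhighlight>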


See also

* Bernoulli process, a random process consisting of a sequence of independent Bernoulli trials
* Bernoulli sampling
* Binary entropy function
* Binary decision diagram



External links

* Interactive graphic: Univariate Distribution Relationships